60 research outputs found

    On an adaptive regularization for ill-posed nonlinear systems and its trust-region implementation

    In this paper we address the stable numerical solution of nonlinear ill-posed systems by a trust-region method. We show that an appropriate choice of the trust-region radius gives rise to a procedure that has the potential to approach a solution of the unperturbed system. This regularizing property is shown theoretically and validated numerically.
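    The abstract does not spell out the regularizing radius rule, so the Python sketch below only illustrates the general kind of iteration involved: a generic trust-region (Levenberg-Marquardt style) loop for a small nonlinear system, with a standard heuristic radius update assumed in place of the paper's regularizing choice.

```python
# Illustrative trust-region / Levenberg-Marquardt loop for a nonlinear system
# F(x) = 0 posed as least squares. NOT the paper's algorithm: the radius update
# below is a generic heuristic, not the regularizing choice analyzed there.
import numpy as np

def trust_region_solve(F, J, x0, delta=1.0, max_iter=50, tol=1e-8):
    """F(x): residual vector, J(x): Jacobian; minimizes 0.5*||F(x)||^2."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        r, A = F(x), J(x)
        g = A.T @ r                              # gradient of 0.5*||F||^2
        if np.linalg.norm(g) < tol:
            break
        # Approximate trust-region subproblem: increase the shift mu until
        # the damped Gauss-Newton step fits inside the radius.
        mu = 1e-12
        while True:
            step = np.linalg.solve(A.T @ A + mu * np.eye(x.size), -g)
            if np.linalg.norm(step) <= delta or mu > 1e12:
                break
            mu *= 2.0
        pred = -(g @ step + 0.5 * step @ (A.T @ A @ step))    # model decrease
        ared = 0.5 * (r @ r - F(x + step) @ F(x + step))      # actual decrease
        rho = ared / pred if pred > 0 else -1.0
        if rho > 0.1:                                         # accept the step
            x = x + step
        # Generic radius update (illustrative only).
        delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

# Small example: F(x) = [x0^2 + x1 - 1, x0 + x1^2 - 1].
F = lambda x: np.array([x[0]**2 + x[1] - 1.0, x[0] + x[1]**2 - 1.0])
J = lambda x: np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])
print(trust_region_solve(F, J, [0.3, 0.3]))
```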

    Global convergence enhancement of classical linesearch interior point methods for MCPs

    Recent works have shown that a wide class of globally convergent interior point methods may exhibit convergence failures. These failures can be ascribed to the linesearch procedure along the Newton step. In this paper, we introduce a globally convergent interior point method that performs backtracking along a piecewise linear path. Theoretical and computational results show the effectiveness of our proposal.
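    As a purely illustrative sketch of the backtracking-along-a-path idea, the snippet below evaluates trial points on a hypothetical two-segment piecewise linear path that bends from a short "safe" direction toward the full Newton step, shrinking the path parameter until a merit function decreases and the trial stays in the positive orthant. The directions, the path shape, the merit function, and the interiority test are all assumptions made for illustration; they are not the paper's method.

```python
# Illustrative backtracking along a hypothetical two-segment piecewise linear
# path, instead of damping the Newton step directly. All quantities here
# (directions, path, merit, interiority test) are placeholders for illustration.
import numpy as np

def piecewise_path(x, d_cent, d_newt, t):
    """Point at parameter t in [0, 1]: the path goes from x to x + 0.5*d_cent
    (first segment) and then on to the full Newton point x + d_newt."""
    if t <= 0.5:
        return x + (2.0 * t) * 0.5 * d_cent
    s = 2.0 * (t - 0.5)
    return x + 0.5 * d_cent + s * (d_newt - 0.5 * d_cent)

def backtrack_on_path(merit, x, d_cent, d_newt, beta=0.5, max_tries=30):
    """Shrink t geometrically until the merit decreases and the trial point
    stays in the (open) positive orthant."""
    t, m0 = 1.0, merit(x)
    for _ in range(max_tries):
        trial = piecewise_path(x, d_cent, d_newt, t)
        if merit(trial) < m0 and np.all(trial > 0):
            return trial
        t *= beta
    return x                     # no acceptable point found; keep the iterate

# Tiny demo: the full Newton step would leave the positive orthant, but a
# point further back on the path is accepted.
x0 = np.array([1.0, 1.0])
d_newt = np.array([-1.5, -1.5])
d_cent = np.array([-0.2, -0.2])
print(backtrack_on_path(lambda z: float(np.sum(z**2)), x0, d_cent, d_newt))
```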

    Inexact restoration with subsampled trust-region methods for finite-sum minimization

    Convex and nonconvex finite-sum minimization arises in many scientific computing and machine learning applications. Recently, first-order and second-order methods in which objective functions, gradients, and Hessians are approximated by randomly sampling components of the sum have received great attention. We propose a new trust-region method that employs suitable approximations of the objective function, gradient, and Hessian built via random subsampling techniques. The choice of the sample size is deterministic and governed by the inexact restoration approach. We discuss local and global convergence properties for finding approximate first- and second-order optimal points, as well as function-evaluation complexity results. Numerical experience shows that the new procedure is more efficient, in terms of overall computational cost, than the standard trust-region scheme with subsampled Hessians.
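    A minimal sketch of the subsampling idea, in Python: one trust-region loop in which the objective, gradient, and Hessian of a finite sum are estimated from a random subsample, and the step is the Cauchy point of the sampled model. The paper's deterministic, inexact-restoration-based rule for the sample size is not reproduced; a fixed sample size and a standard radius update are assumed for illustration.

```python
# Illustrative subsampled trust-region loop for f(x) = (1/N) * sum_i f_i(x).
# NOT the paper's algorithm: the sample size is fixed here, whereas the paper
# couples it to an inexact-restoration rule; the step is just the Cauchy point.
import numpy as np

rng = np.random.default_rng(0)

def cauchy_step(g, H, delta):
    """Minimizer of the quadratic model along -g within the trust region."""
    gnorm = np.linalg.norm(g)
    gHg = g @ H @ g
    tau = 1.0 if gHg <= 0 else min(1.0, gnorm**3 / (delta * gHg))
    return -tau * (delta / gnorm) * g

def subsampled_tr(f_i, g_i, h_i, x0, N, sample=32, delta=1.0, iters=100):
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        S = rng.choice(N, size=min(sample, N), replace=False)
        fS = lambda z: np.mean([f_i(i, z) for i in S])        # sampled objective
        g = np.mean([g_i(i, x) for i in S], axis=0)           # sampled gradient
        H = np.mean([h_i(i, x) for i in S], axis=0)           # sampled Hessian
        if np.linalg.norm(g) < 1e-8:
            break
        s = cauchy_step(g, H, delta)
        pred = -(g @ s + 0.5 * s @ H @ s)                     # model decrease
        rho = (fS(x) - fS(x + s)) / pred if pred > 0 else -1.0
        if rho > 0.1:
            x = x + s
        delta = 2 * delta if rho > 0.75 else (0.5 * delta if rho < 0.25 else delta)
    return x

# Toy finite sum: f_i(x) = 0.5 * (a_i . x - b_i)^2.
N, d = 500, 5
A, xtrue = rng.standard_normal((N, d)), rng.standard_normal(d)
b = A @ xtrue + 0.01 * rng.standard_normal(N)
f_i = lambda i, x: 0.5 * (A[i] @ x - b[i]) ** 2
g_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
h_i = lambda i, x: np.outer(A[i], A[i])
print(np.linalg.norm(subsampled_tr(f_i, g_i, h_i, np.zeros(d), N) - xtrue))
```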

    A stochastic first-order trust-region method with inexact restoration for finite-sum minimization

    We propose a stochastic first-order trust-region method with inexact function and gradient evaluations for solving finite-sum minimization problems. At each iteration, the function and the gradient are approximated by sampling. The sample size in gradient approximations is smaller than the sample size in function approximations, and the latter is determined using a deterministic rule inspired by the inexact restoration method, which allows the sample size to decrease at some iterations. The trust-region step is then either accepted or rejected using a suitable merit function, which combines the function estimate with a measure of accuracy in the evaluation. We show that the proposed method eventually reaches full precision in evaluating the objective function, and we provide a worst-case complexity result on the number of iterations required to achieve full precision. We validate the proposed algorithm on nonconvex binary classification problems, showing good performance in terms of cost and accuracy; importantly, no burdensome tuning of the parameters involved is required.
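    A minimal sketch of the first-order variant described above, under stated assumptions: the gradient is estimated from a smaller sample than the function, the step is a normalized gradient step of trust-region length, and acceptance compares function estimates penalized by their sample standard errors as a stand-in for the paper's merit function and accuracy measure.

```python
# Illustrative stochastic first-order trust-region step with inexact function
# and gradient estimates. NOT the paper's algorithm: the merit rule below uses
# sample standard errors as a stand-in accuracy measure, and the sample sizes
# are fixed rather than chosen by an inexact-restoration rule.
import numpy as np

rng = np.random.default_rng(1)

def estimate_f(f_i, x, idx):
    """Sampled objective estimate and its standard error."""
    vals = np.array([f_i(i, x) for i in idx])
    return vals.mean(), vals.std() / np.sqrt(len(idx))

def stochastic_tr_step(f_i, g_i, x, N, delta, n_f=128, n_g=32):
    """One step: small gradient sample, larger function sample, cautious accept."""
    idx_g = rng.choice(N, size=min(n_g, N), replace=False)
    g = np.mean([g_i(i, x) for i in idx_g], axis=0)
    s = -delta * g / (np.linalg.norm(g) + 1e-16)              # first-order TR step
    idx_f = rng.choice(N, size=min(n_f, N), replace=False)
    f_old, err_old = estimate_f(f_i, x, idx_f)
    f_new, err_new = estimate_f(f_i, x + s, idx_f)
    # Accept only if the estimated decrease exceeds the estimated inaccuracy.
    if f_new + err_new < f_old - err_old:
        return x + s, 2 * delta
    return x, 0.5 * delta

# Toy linear least-squares finite sum.
N, d = 400, 3
A, xtrue = rng.standard_normal((N, d)), rng.standard_normal(d)
b = A @ xtrue
f_i = lambda i, x: 0.5 * (A[i] @ x - b[i]) ** 2
g_i = lambda i, x: (A[i] @ x - b[i]) * A[i]
x, delta = np.zeros(d), 1.0
for _ in range(200):
    x, delta = stochastic_tr_step(f_i, g_i, x, N, delta)
print(np.linalg.norm(x - xtrue))
```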